
    Building detection in very high resolution multispectral data with deep learning features

    Automated man-made object detection and building extraction from single satellite images remains one of the most challenging tasks for urban planning and monitoring applications. To this end, we propose an automated building detection framework for very high resolution remote sensing data based on deep convolutional neural networks. The core of the developed method is a supervised classification procedure employing a very large training dataset. An MRF model is then responsible for obtaining the optimal labels regarding the detection of scene buildings. The experimental results and the quantitative validation indicate the promising potential of the developed approach.
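
    To make the pipeline in this abstract concrete, the sketch below is a hypothetical, much-simplified illustration: per-pixel class probabilities (a stand-in for the CNN classifier's output) are smoothed with a Potts-style pairwise term using iterated conditional modes, a crude surrogate for the paper's MRF labeling step. All function names, parameters and the toy data are assumptions, not the authors' implementation.

```python
import numpy as np

def icm_smooth(prob, beta=1.0, n_iter=5):
    """Crude MRF-style label smoothing via iterated conditional modes (ICM).

    prob : (H, W, C) per-pixel class probabilities, e.g. from a CNN classifier.
    beta : weight of the Potts pairwise term encouraging neighbouring pixels
           to share the same label.
    Returns an (H, W) array of smoothed labels.
    """
    H, W, C = prob.shape
    unary = -np.log(prob + 1e-8)          # unary energy per class
    labels = prob.argmax(axis=2)          # initialise with the classifier's argmax

    for _ in range(n_iter):
        for y in range(H):
            for x in range(W):
                # labels of the 4-connected neighbours
                neigh = []
                if y > 0:     neigh.append(labels[y - 1, x])
                if y < H - 1: neigh.append(labels[y + 1, x])
                if x > 0:     neigh.append(labels[y, x - 1])
                if x < W - 1: neigh.append(labels[y, x + 1])
                neigh = np.asarray(neigh)
                # Potts pairwise cost: penalise disagreement with neighbours
                pairwise = np.array([beta * np.sum(neigh != c) for c in range(C)])
                labels[y, x] = np.argmin(unary[y, x] + pairwise)
    return labels

# toy usage: random "CNN" probabilities for a 2-class building / background map
rng = np.random.default_rng(0)
prob = rng.dirichlet(alpha=[1.0, 1.0], size=(64, 64))
building_mask = icm_smooth(prob, beta=2.0)
```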

    Keep It SimPool: Who Said Supervised Transformers Suffer from Attention Deficit?

    Convolutional networks and vision transformers have different forms of pairwise interactions, pooling across layers and pooling at the end of the network. Does the latter really need to be different? As a by-product of pooling, vision transformers provide spatial attention for free, but this is most often of low quality unless self-supervised, which is not well studied. Is supervision really the problem? In this work, we develop a generic pooling framework and then we formulate a number of existing methods as instantiations. By discussing the properties of each group of methods, we derive SimPool, a simple attention-based pooling mechanism as a replacement of the default one for both convolutional and transformer encoders. We find that, whether supervised or self-supervised, this improves performance on pre-training and downstream tasks and provides attention maps delineating object boundaries in all cases. One could thus call SimPool universal. To our knowledge, we are the first to obtain attention maps in supervised transformers of at least as good quality as self-supervised, without explicit losses or modifying the architecture. Code and models: https://github.com/billpsomas/simpool (ICCV 2023).
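
    The sketch below illustrates the general idea of attention-based pooling that the abstract discusses: a single query, initialised as the global average of the tokens, attends over all tokens to produce one pooled vector. It is a minimal PyTorch illustration under assumed shapes and projections, not the released SimPool module; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Simplified attention-based pooling: a global-average query attends over
    all tokens to produce one vector. Illustrative only, not SimPool itself."""

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)   # query projection
        self.k = nn.Linear(dim, dim, bias=False)   # key projection
        self.scale = dim ** -0.5

    def forward(self, tokens):
        # tokens: (B, N, D) patch tokens from a CNN feature map or a ViT
        query = tokens.mean(dim=1, keepdim=True)            # (B, 1, D) GAP query
        attn = (self.q(query) @ self.k(tokens).transpose(1, 2)) * self.scale
        attn = attn.softmax(dim=-1)                          # (B, 1, N) attention map
        return (attn @ tokens).squeeze(1)                    # (B, D) pooled vector

# toy usage
pooled = AttentionPool(dim=384)(torch.randn(2, 196, 384))   # -> shape (2, 384)
```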

    It Takes Two to Tango: Mixup for Deep Metric Learning

    Metric learning involves learning a discriminative representation such that embeddings of similar classes are encouraged to be close, while embeddings of dissimilar classes are pushed far apart. State-of-the-art methods focus mostly on sophisticated loss functions or mining strategies. On the one hand, metric learning losses consider two or more examples at a time. On the other hand, modern data augmentation methods for classification consider two or more examples at a time. The combination of the two ideas is under-studied. In this work, we aim to bridge this gap and improve representations using mixup, which is a powerful data augmentation approach interpolating two or more examples and corresponding target labels at a time. This task is challenging because, unlike classification, the loss functions used in metric learning are not additive over examples, so the idea of interpolating target labels is not straightforward. To the best of our knowledge, we are the first to investigate mixing both examples and target labels for deep metric learning. We develop a generalized formulation that encompasses existing metric learning loss functions and modify it to accommodate mixup, introducing Metric Mix, or Metrix. We also introduce a new metric, utilization, to demonstrate that by mixing examples during training we are exploring areas of the embedding space beyond the training classes, thereby improving representations. To validate the effect of improved representations, we show that mixing inputs, intermediate representations or embeddings along with target labels significantly outperforms state-of-the-art metric learning methods on four benchmark deep metric learning datasets.
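
    As a rough illustration of interpolating both examples and target labels in a pairwise setting, the sketch below mixes a positive and a negative embedding and uses the mixing coefficient as a soft "is-positive" target in a simple contrastive loss. The loss form, temperature and function names are assumptions for illustration, not the authors' Metrix formulation.

```python
import torch
import torch.nn.functional as F

def mixup_contrastive_loss(anchor, positive, negative, alpha=1.0):
    """Toy embedding-level mixup for a pairwise metric-learning loss.

    anchor, positive, negative: (B, D) L2-normalised embeddings, where
    `positive` shares the anchor's class and `negative` does not. The mixed
    embedding is treated as a positive with weight lam and a negative with
    weight (1 - lam), i.e. the target label is interpolated.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    mixed = F.normalize(lam * positive + (1 - lam) * negative, dim=-1)

    sim = (anchor * mixed).sum(dim=-1)          # cosine similarity, in [-1, 1]
    logits = sim / 0.1                          # temperature-scaled logit
    target = torch.full_like(sim, lam)          # interpolated "is-positive" label
    return F.binary_cross_entropy_with_logits(logits, target)

# toy usage with random embeddings
a, p, n = (F.normalize(torch.randn(8, 128), dim=-1) for _ in range(3))
loss = mixup_contrastive_loss(a, p, n)
```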

    Global assessment of innovative solutions to tackle marine litter

    Marine litter is one of the most pressing pollution problems facing our oceans today and a major threat to a sustainable planet. Here, we provide a comprehensive analysis of cutting-edge solutions developed globally to prevent, monitor and clean marine litter. Prevention in this research includes only innovative solutions to prevent litter entering oceans and seas, rather than interventions such as waste reduction and recycling. On the basis of extensive search and data compilation, our analysis reveals that information is dispersed across platforms and is not easily accessible. In total, 177 solutions—equivalent to <0.9% of the search hits—fulfilled our validation criteria and were evaluated. Most solutions (n = 106, 60%) primarily address monitoring and were developed during the past 3 years, with the scientific community being the key driver. Few solutions reached mature technical readiness and market availability, while none were validated for efficiency and environmental impact. Looking ahead, we elaborate on the limitations of the existing solutions and the challenges of developing new ones, and provide recommendations for funding schemes and policy instruments to prevent, monitor and clean marine litter globally. In doing so, we encourage researchers, innovators and policy-makers worldwide to act towards achieving and sustaining a cleaner ocean for future generations.

    PEERS - an open science “Platform for the Exchange of Experimental Research Standards” in biomedicine

    Funding: The PEERS Consortium is currently funded by Cohen Veterans Bioscience Ltd and grants COH-0011 from Steven A. Cohen. Acknowledgements: We would like to thank IJsbrand Jan Aalbersberg, Natasja de Bruin, Philippe Chamiot-Clerc, Anja Gilis, Lieve Heylen, Martine Hofmann, Patricia Kabitzke, Isabel Lefevre, Janko Samardzic, Susanne Schiffmann and Guido Steiner for their valuable input and discussions during the conceptualization of PEERS and the initial phase of the project.

    Automatic Descriptor-Based Co-Registration of Frame Hyperspectral Data

    Frame hyperspectral sensors, in contrast to push-broom or line-scanning ones, produce hyperspectral datasets with, in general, better geometry but with unregistered spectral bands. Because the bands are acquired at different instants and the platform (UAV, aircraft, etc.) moves between exposures, every spectral band is displaced and acquired with a different geometry. The automatic and accurate registration of hyperspectral datasets from frame sensors remains a challenge. Powerful local feature descriptors, when computed across the spectrum, fail to extract enough correspondences to successfully complete the registration procedure. To this end, we propose a generic and automated framework which decomposes the problem and enables the efficient computation of a sufficient number of accurate correspondences over the given spectrum, without using any ancillary data (e.g., from GPS/IMU). First, the spectral bands are divided into spectral groups according to their wavelength. The spectral borders of each group are not strict, and their formulation allows some overlap. The spectral variance and proximity determine the suitability of every spectral band to act as a reference during the registration procedure. The proposed decomposition allows the descriptor and the robust estimation process to deliver numerous inliers. The search space of possible solutions is effectively narrowed by sorting and selecting the optimal spectral bands which, in an unsupervised manner, can quickly recover the hypercube's geometry. The developed approach has been qualitatively and quantitatively evaluated with six different datasets obtained by frame sensors onboard aerial platforms and UAVs. Experimental results appear promising.
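
    The sketch below shows only the basic descriptor-plus-RANSAC building block that such a framework refines, registering one spectral band to an assumed reference band with standard OpenCV calls; it does not implement the spectral grouping or automatic reference-band selection described above, and the input arrays are assumed 8-bit grayscale images.

```python
import cv2
import numpy as np

def register_band(band, reference):
    """Register one spectral band to a reference band with local descriptors
    and RANSAC. Basic building block only, not the paper's full framework.

    band, reference: 8-bit single-channel images (H, W) of the same scene.
    Returns the band warped into the reference geometry.
    """
    orb = cv2.ORB_create(nfeatures=4000)
    kp1, des1 = orb.detectAndCompute(band, None)
    kp2, des2 = orb.detectAndCompute(reference, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # robust estimation: RANSAC discards mismatches across the spectral gap
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = reference.shape
    return cv2.warpPerspective(band, H, (w, h))
```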
